
    Inferences on median failure time for censored survival data

    In this thesis, two approaches to inference on median failure times are developed to compare the median failure times of two groups of censored survival data. The first generalizes Mood's median test, which is designed for complete data, to censored survival data. To this end, the two groups of censored survival data are pooled and the pooled median failure time is estimated by the product-limit (Kaplan-Meier) method. Each observation is assigned a score indicating the probability that it survives beyond the pooled median failure time, and the scores in each group are summed to summarize the number of observations whose survival time is at least the pooled median. This yields a 2x2 contingency table with non-integer entries, from which four 2x2 contingency tables with integer entries are derived. A test statistic is then defined as a weighted sum of the statistics from the four tables and is shown to be approximately chi-square distributed with 1 degree of freedom for large samples. The second approach constructs a 95% confidence interval for the difference of median failure times between two groups of censored survival distributions. Since the median failure time is approximately normally distributed for large samples, the median failure time of each group is estimated by the product-limit method and its standard error is computed from bootstrap samples of the original data. The construction of the 95% confidence interval is worked out for the standard normal distribution and extends to general normal distributions by translation and rescaling. Extensive numerical studies show that the two approaches are easy to implement and give promising results compared with those in published papers. The proposed methods will facilitate more accurate analysis of censored survival data, which are commonly collected in clinical studies that influence public health.
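    The second approach can be sketched in a few lines: a product-limit (Kaplan-Meier) estimate of each group's median, bootstrap standard errors, and a normal-approximation interval for the difference. The sketch below is only an illustration of that recipe, not the thesis implementation; the function names and the 1.96 normal quantile for a 95% interval are our assumptions.

```python
import numpy as np

def km_median(times, events):
    """Product-limit (Kaplan-Meier) estimate of the median failure time.

    times  : observed times (failure or censoring)
    events : 1 if the failure was observed, 0 if censored
    Returns the first time at which the survival curve drops to <= 0.5,
    or inf if the median is never reached.
    """
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    surv, at_risk = 1.0, len(times)
    for t, d in zip(times, events):
        if d:
            surv *= 1.0 - 1.0 / at_risk  # KM factor for one observed failure
        at_risk -= 1
        if surv <= 0.5:
            return t
    return np.inf

def median_diff_ci(t1, e1, t2, e2, n_boot=1000, seed=0):
    """95% normal-approximation CI for the difference of median failure
    times, with each group's standard error estimated by the bootstrap."""
    rng = np.random.default_rng(seed)

    def boot_se(t, e):
        meds = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(t), len(t))  # resample with replacement
            m = km_median(t[idx], e[idx])
            if np.isfinite(m):
                meds.append(m)
        return np.std(meds)

    diff = km_median(t1, e1) - km_median(t2, e2)
    se = np.hypot(boot_se(t1, e1), boot_se(t2, e2))  # sqrt(se1^2 + se2^2)
    return diff - 1.96 * se, diff + 1.96 * se
```

    With fully observed data (all events equal to 1), km_median reduces to the ordinary sample median of the sorted times.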

    Investigations on genomic meta-analysis: imputation for incomplete data and properties of adaptively weighted fisher's method

    Microarray analysis, which monitors the expression of thousands of genes simultaneously, has become a routine experiment in biomedical research over the past decade. The expression data generated by high-throughput experiments may contain thousands of variables and therefore pose great challenges across a wide variety of objectives. A problem commonly encountered by researchers, and the major concern of this thesis, is detecting genes differentially expressed between two or more conditions. In the first part of the thesis, we consider imputation of incomplete data in transcriptomic meta-analysis. In the past decade, a tremendous number of expression profiles have been generated and stored in the public domain, and information integration by meta-analysis to detect differentially expressed (DE) genes has become popular for gaining statistical power and validating findings. Methods that combine p-values are widely used in this genomic setting, among which Fisher's, Stouffer's, minP, and maxP are the most popular. In practice, raw data or p-values of DE evidence for the entire genome are often unavailable in the studies to be combined; instead, only the DE gene lists under a certain p-value threshold (e.g., genes with p-value < 0.001) are reported in journal publications. This truncated p-value information invalidates the aforementioned meta-analysis methods, forcing researchers to apply the less efficient vote-counting method or to naively drop studies with incomplete information. In this thesis, effective imputation methods are derived for such partially censored p-values. We develop and compare three imputation methods (mean imputation, single random imputation, and multiple imputation) for a general class of evidence-aggregation methods of which the Fisher, Stouffer, and logit methods are special cases. 
The null distribution of each method is derived analytically, and the subsequent inference and genomic-analysis framework is established. Simulations are performed to investigate the type I error and power in the univariate case and the control of the false discovery rate (FDR) for (correlated) gene expression data. The proposed methods are also applied to several genomic applications in prostate cancer, major depressive disorder (MDD), colorectal cancer, and pain research. In the second part, we investigate statistical properties of the adaptively weighted (AW) Fisher's method. The traditional Fisher's method assigns equal weights to all studies, which is simple but cannot always achieve high power across the variety of alternative hypothesis settings; intuitively, more weight should be assigned to the studies with high power to detect the difference between conditions. In the AW-Fisher's method, the best binary 0/1 weights are determined by minimizing the p-value of the weighted test statistic. Using an order-statistics technique, the search space for the adaptive weights is reduced from exponential to linear complexity, which dramatically reduces the computational cost; a closed form is derived to compute the p-values for K = 2, and an importance-sampling algorithm is proposed to evaluate the p-values for K > 2. Theoretical properties of the AW-Fisher's method, such as consistency and asymptotic Bahadur optimality (ABO), are also investigated, and simulations verify the asymptotic Bahadur optimality of AW-Fisher and compare the performance of the AW-Fisher and Fisher's methods. 
Meta-analysis of multiple genomic studies increases the statistical power of biomarker detection; the work in this thesis could therefore improve public health by providing more effective methodologies for biomarker detection when integrating multiple genomic studies with incomplete information or under different hypothesis settings.
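    To make the mean-imputation idea concrete, the sketch below combines fully reported and threshold-censored p-values with the Fisher-type statistic T = -2 Σ ln p_i, replacing each censored value by its conditional null mean E[-2 ln p | p > τ] = 2(1 - τ + τ ln τ)/(1 - τ) for p ~ Uniform(0,1). The thesis derives the null distribution of the imputed statistic analytically; since that derivation is not reproduced in the abstract, this sketch falls back on a Monte Carlo null. All names and defaults are illustrative assumptions.

```python
import numpy as np

def mean_impute_fisher(pvals, censored, tau=0.001):
    """Fisher-type statistic T = sum_i -2 ln p_i with mean imputation.

    A study reporting only 'p > tau' (censored=True; its pvals entry is
    just a placeholder) contributes the conditional null mean
    E[-2 ln p | p > tau] = 2(1 - tau + tau*ln(tau)) / (1 - tau).
    """
    pvals = np.asarray(pvals, dtype=float)
    fill = 2.0 * (1.0 - tau + tau * np.log(tau)) / (1.0 - tau)
    terms = np.where(censored, fill, -2.0 * np.log(pvals))
    return terms.sum()

def null_pvalue(t_obs, k, tau=0.001, n_sim=200_000, seed=0):
    """Monte Carlo p-value of T under the global null (all p ~ U(0,1)
    with the same reporting threshold), standing in for the analytically
    derived null distribution of the imputed statistic."""
    rng = np.random.default_rng(seed)
    p = rng.uniform(size=(n_sim, k))
    fill = 2.0 * (1.0 - tau + tau * np.log(tau)) / (1.0 - tau)
    # simulated studies with p > tau would also be reported as censored
    t_null = np.where(p > tau, fill, -2.0 * np.log(p)).sum(axis=1)
    return (t_null >= t_obs).mean()
```

    With no censoring the statistic reduces to the classical Fisher statistic, which is chi-square with 2K degrees of freedom under the null; imputation changes the null distribution, hence the Monte Carlo reference above.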

    Performance enhancement of the soft robotic segment for a trunk-like arm

    Introduction: Trunk-like continuum robots have wide applications in manipulation and locomotion. In particular, trunk-like soft arms exhibit high dexterity and adaptability, much like their counterparts in the natural world. However, owing to their continuum, soft bodies, their payload and spatial-movement performance is limited. Methods: In this paper, we investigate the influence of key design parameters on robotic performance. It is verified that a larger workspace, lateral stiffness, payload, and bending moment can be achieved by adjusting the hardness of the soft materials, the height of the module segments, and the arrayed radius of the actuators. Results: Notably, a 55% increase in arrayed radius enhances the lateral stiffness by 25% and the bending moment by 55%. An 80% increase in segment height enlarges the elongation range by 112% and the bending range by 70%. Increments of around 200% and 150% in the segment's lateral stiffness and payload force, respectively, can be obtained by tuning the hardness of the soft materials. These relations enable design customization of trunk-like soft arms: the tapering structure ensures stability via the stocky base, which reduces impact by 50% compared with the tip, and ensures dexterity via the long tip, whose bending range is over 400% larger than that of the base. Discussion: A complete methodology of design concept, analytical models, simulation, and experiments is developed to offer comprehensive guidelines for trunk-like soft robotic design and to enable high performance in robotic manipulation.

    A unified way of analyzing some greedy algorithms

    A unified way of analyzing different greedy-type algorithms in Banach spaces is presented. We define a class of Weak Biorthogonal Greedy Algorithms and prove convergence and rate-of-convergence results for algorithms from this class. In particular, the well-known Weak Chebyshev Greedy Algorithm and Weak Greedy Algorithm with Free Relaxation belong to this class. We also consider one more algorithm from the class, the Rescaled Weak Relaxed Greedy Algorithm, and discuss modifications of these algorithms motivated by applications. We analyze convergence and rate of convergence under the assumption that the steps of these algorithms may be performed with some errors; we call such algorithms approximate greedy algorithms. We prove convergence and rate-of-convergence results for the Approximate Weak Biorthogonal Greedy Algorithms, which guarantee the stability of Weak Biorthogonal Greedy Algorithms.
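    As a concrete, much simplified illustration of the weak selection rule: in a finite-dimensional Hilbert space the Chebyshev step is an orthogonal projection, so a weak Chebyshev-type greedy algorithm reduces to weak orthogonal matching pursuit over a finite dictionary. The numpy sketch below is our own finite-dimensional analogue with weakness parameter t, not the Banach-space algorithms analyzed in the paper.

```python
import numpy as np

def weak_chebyshev_greedy(f, dictionary, t=0.9, steps=10):
    """Weak greedy approximation in R^n (Euclidean inner product).

    f          : target vector, shape (n,)
    dictionary : unit-norm atoms as columns, shape (n, N)
    t          : weakness parameter in (0, 1]; t = 1 recovers the pure
                 (non-weak) greedy selection
    At step m an atom g_m is accepted if
        |<r_{m-1}, g_m>| >= t * max_g |<r_{m-1}, g>|,
    then f is replaced by its best least-squares approximation from
    span(g_1, ..., g_m), the Hilbert-space Chebyshev step.
    """
    residual = f.copy()
    chosen = []
    for _ in range(steps):
        scores = np.abs(dictionary.T @ residual)
        # weak step: any atom within factor t of the best is admissible;
        # for determinism we take the first admissible index
        admissible = np.flatnonzero(scores >= t * scores.max())
        chosen.append(admissible[0])
        A = dictionary[:, chosen]
        coef, *_ = np.linalg.lstsq(A, f, rcond=None)
        residual = f - A @ coef  # orthogonal-projection residual
        if np.linalg.norm(residual) < 1e-12:
            break
    return chosen, residual
```

    With t = 1 the selection is the pure greedy step; smaller t admits any atom whose inner product with the residual is within a factor t of the best, which is exactly the freedom the "weak" algorithms allow.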

    MRI Lesion Load of Cerebral Small Vessel Disease and Cognitive Impairment in Patients With CADASIL

    Background and objective: Cerebral autosomal-dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) is the best-known and most common monogenic small vessel disease (SVD). Cognitive impairment is an inevitable feature of CADASIL. The total SVD score and the global cortical atrophy (GCA) scale have been found to be good predictors of poor cognitive performance in community-dwelling adults. We aimed to estimate the association between the total SVD score, the GCA scale, and cognitive performance in patients with CADASIL. Methods: We enrolled 20 genetically confirmed CADASIL patients and 20 controls matched by age, gender, and years of education. All participants underwent cognitive assessments rating global cognition and the individual domains of executive function, information processing speed, memory, language, and visuospatial function. The total SVD score and GCA scale were rated. Results: The CADASIL group performed worse than the controls on all cognitive measures. Neither global cognition nor any individual cognitive domain differed significantly among patients grouped by total SVD score. Negative correlations between the GCA score and cognitive performance were observed; approximately 40% of the variance in the domains of executive function, information processing speed, and language was explained by the total GCA score. The superficial atrophy score was associated with poor performance in most cognitive domains, and adding it decreased the predictive power of the deep atrophy score for cognitive impairment alone. Conclusions: The GCA score, not the total SVD score, was significantly associated with poor cognitive performance in patients with CADASIL.

    Pan-Cancer Analysis of lncRNA Regulation Supports Their Targeting of Cancer Genes in Each Tumor Context

    Long noncoding RNAs (lncRNAs) are commonly dysregulated in tumors, but only a handful are known to play pathophysiological roles in cancer. We inferred lncRNAs that dysregulate cancer pathways, oncogenes, and tumor suppressors (cancer genes) by modeling their effects on the activity of transcription factors, RNA-binding proteins, and microRNAs in 5,185 TCGA tumors and 1,019 ENCODE assays. Our predictions included hundreds of candidate onco- and tumor-suppressor lncRNAs (cancer lncRNAs) whose somatic alterations account for the dysregulation of dozens of cancer genes and pathways in each of 14 tumor contexts. To demonstrate proof of concept, we showed that perturbations targeting OIP5-AS1 (an inferred tumor suppressor) and TUG1 and WT1-AS (inferred onco-lncRNAs) dysregulated cancer genes and altered proliferation of breast and gynecologic cancer cells. Our analysis indicates that, although most lncRNAs are dysregulated in a tumor-specific manner, some, including OIP5-AS1, TUG1, NEAT1, MEG3, and TSIX, synergistically dysregulate cancer pathways in multiple tumor contexts.

    Pan-cancer Alterations of the MYC Oncogene and Its Proximal Network across the Cancer Genome Atlas

    Although the MYC oncogene has been implicated in cancer, a systematic assessment of alterations of MYC, related transcription factors, and co-regulatory proteins, forming the proximal MYC network (PMN), across human cancers is lacking. Using computational approaches, we define genomic and proteomic features associated with MYC and the PMN across the 33 cancers of The Cancer Genome Atlas. Pan-cancer, 28% of all samples had at least one of the MYC paralogs amplified. In contrast, the MYC antagonists MGA and MNT were the most frequently mutated or deleted members, suggesting a role as tumor suppressors. MYC alterations were mutually exclusive with PIK3CA, PTEN, APC, or BRAF alterations, suggesting that MYC is a distinct oncogenic driver. Expression analysis revealed MYC-associated pathways in tumor subtypes, such as immune response and growth factor signaling; chromatin, translation, and DNA replication/repair were conserved pan-cancer. This analysis reveals insights into MYC biology and is a reference for biomarkers and therapeutics for cancers with alterations of MYC or the PMN.